Adversarial Noise

Robust Mean Estimation under Quantization

Abdalla, Pedro, Chen, Junren

arXiv.org Machine Learning

Parameter estimation under quantization is a fundamental problem at the intersection of statistics [12], signal processing [14], and machine learning [4]. Quantization is of fundamental importance in these fields, mainly due to its role in reducing memory and storage costs. In distributed learning, quantization plays a key role in reducing the cost of communication between data servers, where a central server has to estimate a parameter from the quantized data sent by the data servers. As pointed out in [17], quantization also contributes to the recent paradigm of data privacy, since quantized samples help protect sensitive information: the estimator only has access to bits, which reduces the chance of leaking sensitive information. In this work, we consider what is arguably the most fundamental parameter estimation task: the estimation of the mean of a random vector X from n i.i.d. samples.
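The abstract does not spell out a particular quantizer or estimator, so the sketch below is only an illustration of the setting under stated assumptions: samples are observed through a dithered uniform quantizer (the dither keeps quantized averages roughly unbiased), and a coordinate-wise trimmed mean serves as a generic robust aggregator. The functions dithered_quantize and trimmed_mean, and all parameter choices, are hypothetical and not the paper's method.

```python
import numpy as np

def dithered_quantize(x, step=0.5, rng=None):
    """Uniform quantizer with non-subtractive uniform dither.

    Adding dither u ~ Unif(-step/2, step/2) before rounding makes the
    quantization error zero-mean, so averages of quantized samples stay
    (approximately) unbiased for the true mean.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(-step / 2, step / 2, size=x.shape)
    return step * np.round((x + u) / step)

def trimmed_mean(q, trim=0.1):
    """Coordinate-wise trimmed mean: a simple robust aggregator that
    discards the largest/smallest `trim` fraction in each coordinate."""
    n = q.shape[0]
    k = int(np.floor(trim * n))
    s = np.sort(q, axis=0)
    return s[k:n - k].mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 2000, 5
    mu = np.arange(d, dtype=float)
    x = rng.standard_t(df=3, size=(n, d)) + mu   # heavy-tailed samples
    x[: n // 50] += 50.0                         # a few corrupted points
    q = dithered_quantize(x, step=0.5, rng=rng)  # only quantized data is observed
    print("naive mean of quantized data:  ", q.mean(axis=0))
    print("trimmed mean of quantized data:", trimmed_mean(q))
```

Running the snippet shows the plain average being pulled by the corrupted rows while the trimmed mean stays close to mu, which is the qualitative behaviour a robust quantized-mean estimator aims for.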


Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attack

Neural Information Processing Systems

A powerful category of (invisible) data poisoning attacks modify a subset of training examples by small adversarial perturbations to change the prediction of certain test-time data. Existing defense mechanisms are not desirable to deploy in practice, as they often either drastically harm the generalization performance, or are attack-specific and prohibitively slow to apply. Here, we propose a simple but highly effective approach that, unlike existing methods, breaks various types of invisible poisoning attacks with only a slight drop in generalization performance. We make the key observation that attacks introduce local sharp regions of high training loss, which, when minimized, result in learning the adversarial perturbations and make the attack successful. To break poisoning attacks, our key idea is to alleviate the sharp loss regions introduced by poisons. To do so, our approach comprises two components: an optimized friendly noise that is generated to maximally perturb examples without degrading the performance, and a randomly varying noise component. The combination of both components builds a very lightweight but extremely effective defense against the most powerful triggerless targeted and hidden-trigger backdoor poisoning attacks, including Gradient Matching, Bulls-eye Polytope, and Sleeper Agent. We show that our friendly noise is transferable to other architectures, and adaptive attacks cannot break our defense due to its random noise component.
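As a rough, hypothetical sketch of the two components described above, the snippet below optimizes a per-example perturbation to be as large as possible while barely changing the model's predictions (a KL penalty), then trains on inputs perturbed by that friendly noise plus a small random noise. The function generate_friendly_noise, the objective weights eps/lam, and the toy model are assumptions for illustration; the paper's exact objective and training pipeline may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def generate_friendly_noise(model, x, eps=0.1, steps=30, lr=0.05, lam=1.0):
    """Sketch: find a per-example perturbation that is as large as possible
    while keeping the model's predictions nearly unchanged (KL term)."""
    model.eval()
    with torch.no_grad():
        clean_logp = F.log_softmax(model(x), dim=1)
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        noisy_logp = F.log_softmax(model(x + delta), dim=1)
        kl = F.kl_div(noisy_logp, clean_logp, log_target=True,
                      reduction="batchmean")
        loss = lam * kl - delta.pow(2).mean()   # maximize noise, keep outputs close
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)             # keep the noise small per pixel
    return delta.detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
    x = torch.rand(16, 3, 8, 8)                 # stand-in for (possibly poisoned) images
    y = torch.randint(0, 10, (16,))
    friendly = generate_friendly_noise(model, x)
    random_part = 0.05 * (2 * torch.rand_like(x) - 1)   # random noise component
    logits = model(x + friendly + random_part)          # train on perturbed inputs
    loss = F.cross_entropy(logits, y)
    print("train loss on perturbed batch:", float(loss))
```

The intent of the combination is that the optimized noise smooths out sharp, poison-induced loss regions while the random component prevents an adaptive attacker from anticipating the exact perturbation.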


Distributed Zero-Order Optimization under Adversarial Noise

Neural Information Processing Systems

We study the problem of distributed zero-order optimization for a class of strongly convex functions. These functions are formed as the average of local objectives associated with different nodes in a prescribed network. We propose a distributed zero-order projected gradient descent algorithm to solve the problem. Exchange of information within the network is permitted only between neighbouring nodes. An important feature of our procedure is that it queries only function values, subject to a general noise model that requires neither zero-mean nor independent errors. We derive upper bounds for the average cumulative regret and optimization error of the algorithm, which highlight the role played by a network connectivity parameter, the number of variables, the noise level, the strong convexity parameter, and smoothness properties of the local objectives. The bounds indicate some key improvements of our method over the state-of-the-art, both in the distributed and standard zero-order optimization settings.
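To make the setup concrete, here is a minimal sketch, under assumptions not taken from the paper, of a distributed zero-order projected gradient step: each node queries only noisy values of its local objective, forms a two-point randomized gradient estimate, mixes its iterate with its neighbours through a doubly stochastic matrix W, and projects onto a convex set. The step-size schedule, smoothing radius, ball constraint, and the helper names distributed_zero_order and project_ball are illustrative choices, not the authors' algorithm.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto a ball of the given radius."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def distributed_zero_order(local_fns, W, d, T=500, radius=1.0, seed=0):
    """Each node: two-point zero-order gradient estimate of its local
    objective, gossip averaging with neighbours via W, then projection."""
    rng = np.random.default_rng(seed)
    n = len(local_fns)
    X = np.zeros((n, d))                        # one iterate per node
    for t in range(1, T + 1):
        eta, h = 1.0 / t, 1.0 / np.sqrt(t)      # step size and smoothing radius
        G = np.zeros((n, d))
        for i, f in enumerate(local_fns):
            u = rng.standard_normal(d)
            u /= np.linalg.norm(u)
            # function values may be noisy; the noise need not be zero-mean
            G[i] = d * (f(X[i] + h * u) - f(X[i] - h * u)) / (2 * h) * u
        X = W @ X - eta * G                     # neighbour averaging + gradient step
        X = np.array([project_ball(x, radius) for x in X])
    return X.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    targets = [np.array([0.3, -0.2]) + 0.1 * rng.standard_normal(2) for _ in range(4)]
    # local strongly convex objectives with bounded query noise
    local_fns = [lambda x, a=a: np.sum((x - a) ** 2) + 0.01 * rng.uniform(-1, 1)
                 for a in targets]
    W = np.array([[0.50, 0.25, 0.00, 0.25],     # doubly stochastic mixing matrix (ring)
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])
    print("estimate:          ", distributed_zero_order(local_fns, W, d=2))
    print("average of targets:", np.mean(targets, axis=0))
```

The minimizer of the averaged quadratic objectives is the mean of the local targets, so the printed estimate should land close to it even though each node sees only noisy function values of its own objective.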